Vatican City
How Christian Leaders Are Challenging the AI Boom
Pope Leo XIV made his first address to the College of Cardinals on May 10, 2025 in Vatican City, and touched upon the rise of artificial intelligence. As technologists race to accelerate AI's progress with minimal guardrails, they are being met with increasing resistance from a powerful global contingent: Christian leaders and their congregations. Christians are not a monolith by any means. But this year, Christian leaders across sects, including Catholics, Evangelicals, and Baptists, sounded the alarm on AI's potential impact on family, human relationships, labor, and the church itself.
- Europe > Holy See > Vatican City (0.45)
- North America > United States > Texas (0.05)
- North America > United States > California (0.05)
- (2 more...)
- Law (0.72)
- Government > Regional Government > North America Government > United States Government (0.70)
Sensitivity of Small Language Models to Fine-tuning Data Contamination
Scaria, Nicy, Kennedy, Silvester John Joseph, Subramani, Deepak
Small Language Models (SLMs) are increasingly being deployed in resource-constrained environments, yet their behavioral robustness to data contamination during instruction tuning remains poorly understood. We systematically investigate the contamination sensitivity of 23 SLMs (270M to 4B parameters) across multiple model families by measuring susceptibility to four transformation types applied during instruction tuning: syntactic transformations (character and word reversal) and semantic transformations (irrelevant and counterfactual responses), each applied at contamination levels of 25%, 50%, 75%, and 100%. Our results reveal fundamental asymmetries in vulnerability patterns: syntactic transformations cause catastrophic performance degradation, with character reversal producing near-complete failure across all models regardless of size or family, while semantic transformations demonstrate distinct threshold behaviors and greater resilience in core linguistic capabilities. Critically, we discover a "capability curse" where larger, more capable models become more susceptible to learning semantic corruptions, effectively following harmful instructions more readily, while our analysis of base versus instruction-tuned variants reveals that alignment provides inconsistent robustness benefits, sometimes even reducing resilience. Our work establishes three core contributions: (1) empirical evidence of SLMs' disproportionate vulnerability to syntactic pattern contamination, (2) identification of asymmetric sensitivity patterns between syntactic and semantic transformations, and (3) systematic evaluation protocols for contamination robustness assessment. These findings have immediate deployment implications, suggesting that current robustness assumptions may not hold for smaller models and highlighting the need for contamination-aware training protocols.
- Europe > France (0.04)
- North America > Canada (0.04)
- Europe > San Marino (0.04)
- (8 more...)
- Information Technology > Data Science > Data Mining (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.95)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.48)
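The contamination setup the abstract describes — applying a syntactic transformation to a fixed fraction of instruction-tuning responses — can be sketched as follows. This is a minimal illustration of the general idea, not the authors' code; the function names and the (instruction, response) pair format are assumptions.

```python
import random

def char_reversal(response: str) -> str:
    # Syntactic contamination: reverse every character in the response.
    return response[::-1]

def word_reversal(response: str) -> str:
    # Syntactic contamination: reverse word order, keeping words intact.
    return " ".join(response.split()[::-1])

def contaminate(dataset, transform, level, seed=0):
    """Apply `transform` to a `level` fraction of (instruction, response) pairs."""
    rng = random.Random(seed)
    n = int(len(dataset) * level)
    idx = set(rng.sample(range(len(dataset)), n))
    return [(ins, transform(resp) if i in idx else resp)
            for i, (ins, resp) in enumerate(dataset)]
```

Running `contaminate` at levels 0.25, 0.5, 0.75, and 1.0 reproduces the paper's contamination schedule at the dataset level.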
AI Diffusion in Low Resource Language Countries
Misra, Amit, Zamir, Syed Waqas, Hamidouche, Wassim, Becker-Reshef, Inbal, Ferres, Juan Lavista
Artificial intelligence (AI) is diffusing globally at unprecedented speed, but adoption remains uneven. Frontier Large Language Models (LLMs) are known to perform poorly on low-resource languages due to data scarcity. We hypothesize that this performance deficit reduces the utility of AI, thereby slowing adoption in Low-Resource Language Countries (LRLCs). To test this, we use a weighted regression model to isolate the language effect from socioeconomic and demographic factors, finding that LRLCs have a share of AI users that is approximately 20% lower relative to their baseline. These results indicate that linguistic accessibility is a significant, independent barrier to equitable AI diffusion.
- North America > The Bahamas (0.14)
- North America > United States > District of Columbia > Washington (0.05)
- South America > Venezuela (0.04)
- (186 more...)
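The abstract's core method — a weighted regression that isolates a language indicator's effect on AI adoption net of socioeconomic controls — reduces to weighted least squares with an LRLC dummy. Below is a self-contained sketch on synthetic country-level data; the variables (log GDP per capita as the control, population-style weights) and the simulated effect size are illustrative assumptions, not the paper's data or estimates.

```python
import numpy as np

def weighted_ols(X, y, w):
    # Weighted least squares: solve (X' W X) beta = (X' W y) with W = diag(w).
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

# Hypothetical country-level data: intercept, LRLC indicator, one control.
rng = np.random.default_rng(0)
n = 200
lrlc = rng.integers(0, 2, n)            # 1 if low-resource-language country
gdp = rng.normal(9, 1, n)               # log GDP per capita (control)
share = 0.3 + 0.05 * gdp - 0.06 * lrlc + rng.normal(0, 0.01, n)
X = np.column_stack([np.ones(n), lrlc, gdp])
w = rng.uniform(0.5, 1.5, n)            # e.g. population weights
beta = weighted_ols(X, share, w)
# beta[1] recovers the LRLC effect net of the control (about -0.06 here)
```

The coefficient on the LRLC dummy, expressed relative to the baseline adoption share, corresponds to the "approximately 20% lower" figure reported in the abstract.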
Catholic clergy sex abuse survivors hopeful after Pope Leo meeting
Survivors of sex abuse by members of the Catholic clergy have expressed hope after meeting Pope Leo at the Vatican for the first time. Gemma Hickey, board president of Ending Clergy Abuse (ECA Global), told the BBC it spoke volumes that he had met them so soon in his papacy. The group is pushing for a global zero-tolerance policy, already adopted in the US, of permanently removing a priest who admits or is proven to have sexually abused a child. The Pope acknowledged there was resistance in some parts of the world to this, Hickey said. The new Pope, who assumed the role in May, has inherited an issue that has haunted the Catholic Church for decades and that the Vatican has struggled to root out.
- North America > United States (0.69)
- South America (0.15)
- North America > Central America (0.15)
- (15 more...)
- Law > Criminal Law (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (1.00)
On the Role of Unobserved Sequences on Sample-based Uncertainty Quantification for LLMs
Kunitomo-Jacquin, Lucie, Marrese-Taylor, Edison, Fukuda, Ken
Quantifying uncertainty in large language models (LLMs) is important for safety-critical applications because it helps spot incorrect answers, known as hallucinations. One major trend of uncertainty quantification methods is based on estimating the entropy of the distribution of the LLM's potential output sequences. This estimation is based on a set of output sequences and associated probabilities obtained by querying the LLM several times. In this paper, we advocate and experimentally show that the probability of unobserved sequences plays a crucial role, and we recommend future research to integrate it to enhance such LLM uncertainty quantification methods.
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.15)
- Europe > Holy See > Vatican City (0.06)
- North America > Mexico > Mexico City > Mexico City (0.04)
- North America > Canada (0.04)
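The sample-based entropy estimation the abstract discusses, and the role of unobserved sequences, can be illustrated with a toy correction. The plug-in estimator below uses only observed sequence probabilities; the second function spreads the leftover probability mass uniformly over hypothetical unseen sequences — an assumption for illustration, not the paper's proposed estimator.

```python
import math

def observed_entropy(probs):
    # Plug-in entropy over the observed output sequences only.
    return -sum(p * math.log(p) for p in probs)

def entropy_with_unobserved(probs, n_unseen=1000):
    # Illustrative correction: spread the unobserved probability mass
    # uniformly over n_unseen hypothetical sequences (an assumption,
    # not the paper's method).
    m = sum(probs)
    h = observed_entropy(probs)
    rest = 1.0 - m
    if rest > 0:
        q = rest / n_unseen
        h += -n_unseen * q * math.log(q)  # = -rest * log(rest / n_unseen)
    return h
```

Because the observed samples typically cover only part of the output distribution, ignoring the residual mass systematically underestimates entropy — the gap the paper argues future methods should account for.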
Evaluating Large Language Models for IUCN Red List Species Information
Large Language Models (LLMs) are rapidly being adopted in conservation to address the biodiversity crisis, yet their reliability for species evaluation is uncertain. This study systematically validates five leading models on 21,955 species across four core IUCN Red List assessment components: taxonomy, conservation status, distribution, and threats. A critical paradox was revealed: models excelled at taxonomic classification (94.9%) but consistently failed at conservation reasoning (27.2% for status assessment). This knowledge-reasoning gap, evident across all models, suggests inherent architectural constraints, not just data limitations. Furthermore, models exhibited systematic biases favoring charismatic vertebrates, potentially amplifying existing conservation inequities. These findings delineate clear boundaries for responsible LLM deployment: they are powerful tools for information retrieval but require human oversight for judgment-based decisions. A hybrid approach is recommended, where LLMs augment expert capacity while human experts retain sole authority over risk assessment and policy.
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.70)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.93)
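The component-wise accuracy figures quoted in the abstract (94.9% on taxonomy vs 27.2% on status assessment) come from aggregating per-species judgments by assessment component. A minimal aggregation sketch, assuming a simple (component, correct) record format:

```python
from collections import defaultdict

def component_accuracy(records):
    # records: iterable of (component, correct: bool) pairs,
    # one per model prediction on a species.
    hits, totals = defaultdict(int), defaultdict(int)
    for comp, ok in records:
        totals[comp] += 1
        hits[comp] += bool(ok)
    return {c: hits[c] / totals[c] for c in totals}
```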
Mechanistic Interpretability with SAEs: Probing Religion, Violence, and Geography in Large Language Models
Simbeck, Katharina, Mahran, Mariam
Despite growing research on bias in large language models (LLMs), most work has focused on gender and race, with little attention to religious identity. This paper explores how religion is internally represented in LLMs and how it intersects with concepts of violence and geography. Using mechanistic interpretability and Sparse Autoencoders (SAEs) via the Neuronpedia API, we analyze latent feature activations across five models. We measure overlap between religion- and violence-related prompts and probe semantic patterns in activation contexts. While all five religions show comparable internal cohesion, Islam is more frequently linked to features associated with violent language. In contrast, geographic associations largely reflect real-world religious demographics, revealing how models embed both factual distributions and cultural stereotypes. These findings highlight the value of structural analysis in auditing not just outputs but also internal representations that shape model behavior.
- North America > United States > New York > New York County > New York City (0.28)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Asia > Middle East > Palestine > Gaza Strip > Gaza Governorate > Gaza (0.14)
- (225 more...)
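The overlap measurement the abstract describes — comparing which SAE features activate for religion-related versus violence-related prompts — can be sketched as a Jaccard overlap of each prompt set's top-activating features. The dict format ({feature_id: mean activation}) and the top-k cutoff are assumptions for illustration; the paper works with Neuronpedia API activations.

```python
def top_features(activations, k=5):
    # activations: {feature_id: mean activation strength over a prompt set}
    return set(sorted(activations, key=activations.get, reverse=True)[:k])

def jaccard_overlap(acts_a, acts_b, k=5):
    # Fraction of shared top-k features between two prompt sets.
    a, b = top_features(acts_a, k), top_features(acts_b, k)
    return len(a & b) / len(a | b)
```

A higher overlap between one religion's prompts and the violence prompt set than another's is the kind of asymmetry the paper reports.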
GENUINE: Graph Enhanced Multi-level Uncertainty Estimation for Large Language Models
Wang, Tuo, Kulkarni, Adithya, Cody, Tyler, Beling, Peter A., Yan, Yujun, Zhou, Dawei
Uncertainty estimation is essential for enhancing the reliability of Large Language Models (LLMs), particularly in high-stakes applications. Existing methods often overlook semantic dependencies, relying on token-level probability measures that fail to capture structural relationships within the generated text. We propose GENUINE: Graph ENhanced mUlti-level uncertaINty Estimation for Large Language Models, a structure-aware framework that leverages dependency parse trees and hierarchical graph pooling to refine uncertainty quantification. By incorporating supervised learning, GENUINE effectively models semantic and structural relationships, improving confidence assessments. Extensive experiments across NLP tasks show that GENUINE achieves up to 29% higher AUROC than semantic entropy-based approaches and reduces calibration errors by over 15%, demonstrating the effectiveness of graph-based uncertainty modeling. The code is available at https://github.com/ODYSSEYWT/GUQ.
- Europe > Austria > Vienna (0.14)
- South America > Colombia > Bogotá D.C. > Bogotá (0.04)
- Asia > Singapore (0.04)
- (14 more...)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Grammars & Parsing (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.96)
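The AUROC figure the GENUINE abstract reports measures how well an uncertainty score ranks incorrect answers above correct ones. A minimal plug-in computation (the rank-statistic definition of AUROC, not the paper's implementation) looks like this:

```python
def auroc(scores, labels):
    # Probability that a randomly chosen positive (label 1, e.g. an
    # incorrect answer) gets a higher uncertainty score than a randomly
    # chosen negative (label 0), counting ties as half.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

An AUROC of 0.5 means the uncertainty score is no better than chance at flagging errors; the 29% relative gain the abstract cites is measured on this scale.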